
    Compilation for QCSP

    We propose in this article a framework for the compilation of quantified constraint satisfaction problems (QCSP). We establish the semantics of this formalism by an interpretation to a QCSP. We specify an algorithm, embedded into a search algorithm and based on the inductive semantics of QCSP, to compile a QCSP. We introduce an optimality property and demonstrate the optimality of the interpretation of the compiled QCSP.
    Comment: Proceedings of the 13th International Colloquium on Implementation of Constraint LOgic Programming Systems (CICLOPS 2013), Istanbul, Turkey, August 25, 201

    Scaling-up Empirical Risk Minimization: Optimization of Incomplete U-statistics

    In a wide range of statistical learning problems such as ranking, clustering or metric learning among others, the risk is accurately estimated by U-statistics of degree d ≥ 1, i.e. functionals of the training data with low variance that take the form of averages over k-tuples. From a computational perspective, the calculation of such statistics is highly expensive even for a moderate sample size n, as it requires averaging O(n^d) terms. This makes learning procedures relying on the optimization of such data functionals hardly feasible in practice. It is the major goal of this paper to show that, strikingly, such empirical risks can be replaced by drastically computationally simpler Monte-Carlo estimates based on O(n) terms only, usually referred to as incomplete U-statistics, without damaging the O_P(1/√n) learning rate of Empirical Risk Minimization (ERM) procedures. For this purpose, we establish uniform deviation results describing the error made when approximating a U-process by its incomplete version under appropriate complexity assumptions. Extensions to model selection, fast rate situations and various sampling techniques are also considered, as well as an application to stochastic gradient descent for ERM. Finally, numerical examples are displayed in order to provide strong empirical evidence that the approach we promote largely surpasses more naive subsampling techniques.
    Comment: To appear in Journal of Machine Learning Research. 34 pages. v2: minor correction to Theorem 4 and its proof, added 1 reference. v3: typo corrected in Proposition 3. v4: improved presentation, added experiments on model selection for clustering, fixed minor typo
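The contrast between a complete U-statistic and its incomplete Monte-Carlo counterpart can be sketched as follows. This is an illustrative toy under assumptions of this sketch (the function names and the choice of the variance kernel are not taken from the paper):

```python
import itertools
import random

def complete_u_statistic(data, kernel):
    """Average of the kernel over all pairs: O(n^2) terms for degree 2."""
    pairs = list(itertools.combinations(data, 2))
    return sum(kernel(x, y) for x, y in pairs) / len(pairs)

def incomplete_u_statistic(data, kernel, budget, seed=0):
    """Monte-Carlo estimate: average the kernel over `budget` randomly
    sampled pairs, i.e. O(budget) terms instead of O(n^2)."""
    rng = random.Random(seed)
    n = len(data)
    total = 0.0
    for _ in range(budget):
        i, j = rng.sample(range(n), 2)   # one pair drawn uniformly at random
        total += kernel(data[i], data[j])
    return total / budget

# Example: empirical variance via the degree-2 kernel h(x, y) = (x - y)^2 / 2
kernel = lambda x, y: (x - y) ** 2 / 2
data = [float(v) for v in range(200)]
full = complete_u_statistic(data, kernel)                  # 19,900 terms
approx = incomplete_u_statistic(data, kernel, budget=200)  # only 200 terms
```

For n = 200 points the complete degree-2 statistic averages 19,900 pairs, while the incomplete version averages only the sampled pairs, at the price of extra Monte-Carlo variance.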

    Extending Gossip Algorithms to Distributed Estimation of U-Statistics

    Efficient and robust algorithms for decentralized estimation in networks are essential to many distributed systems. Whereas distributed estimation of sample mean statistics has been the subject of a good deal of attention, computation of U-statistics, relying on more expensive averaging over pairs of observations, is a less investigated area. Yet, such data functionals are essential to describe global properties of a statistical population, with important examples including Area Under the Curve, empirical variance, Gini mean difference and within-cluster point scatter. This paper proposes new synchronous and asynchronous randomized gossip algorithms which simultaneously propagate data across the network and maintain local estimates of the U-statistic of interest. We establish convergence rate bounds of O(1/t) and O(log t / t) for the synchronous and asynchronous cases respectively, where t is the number of iterations, with explicit data- and network-dependent terms. Beyond favorable comparisons in terms of rate analysis, numerical experiments provide empirical evidence that the proposed algorithms surpass the previously introduced approach.
    Comment: to be presented at NIPS 201
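A minimal toy version of the idea, travelling observations combined with local running averages, can be sketched as follows. It illustrates the principle only and is not the paper's actual synchronous or asynchronous protocol:

```python
import random

def gossip_u_statistic(obs, edges, kernel, iters, seed=0):
    """Toy gossip sketch: indices of observations travel the network by
    random edge swaps, while every node keeps a running average of the
    kernel applied to its own observation and the one currently visiting."""
    rng = random.Random(seed)
    n = len(obs)
    visiting = list(range(n))    # index of the observation visiting node k
    est = [0.0] * n              # local estimates of the U-statistic
    cnt = [0] * n
    for _ in range(iters):
        i, j = rng.choice(edges)                          # wake up one edge
        visiting[i], visiting[j] = visiting[j], visiting[i]
        for k in range(n):
            if visiting[k] != k:                          # skip the pair (k, k)
                cnt[k] += 1
                est[k] += (kernel(obs[k], obs[visiting[k]]) - est[k]) / cnt[k]
    return est

# Degree-2 variance kernel on a complete graph of 10 nodes
obs = [float(v) for v in range(10)]
edges = [(i, j) for i in range(10) for j in range(i + 1, 10)]
kernel = lambda x, y: (x - y) ** 2 / 2
est = gossip_u_statistic(obs, edges, kernel, iters=20000)
```

In the limit each local estimate averages the kernel over the node's own observation paired with every other one, so the network-wide mean of the estimates approaches the U-statistic.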

    Gossip Dual Averaging for Decentralized Optimization of Pairwise Functions

    In decentralized networks (of sensors, connected objects, etc.), there is an important need for efficient algorithms to optimize a global cost function, for instance to learn a global model from the local data collected by each computing unit. In this paper, we address the problem of decentralized minimization of pairwise functions of the data points, where these points are distributed over the nodes of a graph defining the communication topology of the network. This general problem finds applications in ranking, distance metric learning and graph inference, among others. We propose new gossip algorithms based on dual averaging which aim at solving such problems both in synchronous and asynchronous settings. The proposed framework is flexible enough to deal with constrained and regularized variants of the optimization problem. Our theoretical analysis reveals that the proposed algorithms preserve the convergence rate of centralized dual averaging up to an additive bias term. We present numerical simulations on Area Under the ROC Curve (AUC) maximization and metric learning problems which illustrate the practical interest of our approach.
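The general shape of such a scheme can be illustrated by the following toy sketch. The update rule, the mixing matrix and the quadratic pairwise loss are assumptions of this sketch, not the paper's exact algorithm:

```python
import math
import random

def gossip_dual_averaging(x, W, grad, iters, gamma=1.0, seed=0):
    """Toy synchronous sketch: each node mixes its dual variable z with its
    neighbours' through a doubly stochastic matrix W, adds the gradient of
    the pairwise loss at its own point paired with a travelling partner
    point, and takes theta = -gamma * z / sqrt(t) as its primal iterate."""
    rng = random.Random(seed)
    n = len(x)
    z = [0.0] * n
    theta = [0.0] * n
    partner = list(range(n))                 # travelling data indices
    for t in range(1, iters + 1):
        i, j = rng.sample(range(n), 2)
        partner[i], partner[j] = partner[j], partner[i]   # propagate data
        mixed = [sum(W[k][l] * z[l] for l in range(n)) for k in range(n)]
        z = [mixed[k] + grad(theta[k], x[k], x[partner[k]]) for k in range(n)]
        theta = [-gamma * z[k] / math.sqrt(t) for k in range(n)]
    return theta

# Toy pairwise loss (theta - (x_i + x_j)/2)^2, minimized at the global mean
x = [float(v) for v in range(10)]
n = len(x)
W = [[1.0 / n] * n for _ in range(n)]        # complete-graph uniform mixing
grad = lambda th, xi, xj: 2.0 * (th - (xi + xj) / 2.0)
theta = gossip_dual_averaging(x, W, grad, iters=5000)
```

Under uniform mixing the averaged iterate follows the centralized dual-averaging recursion, so all local iterates drift toward the minimizer of the pairwise loss, here the global mean of the data.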

    A Semantic Characterization for ASP Base Revision

    The paper deals with base revision for Answer Set Programming (ASP). Base revision in classical logic is done by the removal of formulas. Exploiting the non-monotonicity of ASP allows one to propose other revision strategies, namely an addition strategy or a removal and/or addition strategy. These strategies allow one to define families of rule-based revision operators. The paper presents a semantic characterization of these families of revision operators in terms of answer sets. This semantic characterization makes it possible to equivalently consider the evolution of syntactic logic programs and the evolution of their semantic content. The paper then studies the logical properties of the proposed operators and gives complexity results.

    Computing Query Answering With Non-Monotonic Rules: A Case Study of Archaeology Qualitative Spatial Reasoning

    This paper deals with querying ontology-based knowledge bases equipped with non-monotonic rules through a case study within the framework of Cultural Heritage. It focuses on 3D underwater surveys of the Xlendi wreck, which is represented by an OWL2 knowledge base with a large dataset. The paper aims at improving the interactions between the archaeologists and the knowledge base by providing new queries that involve non-monotonic rules in order to perform qualitative spatial reasoning. To this end, the knowledge base, initially represented in OWL2-QL, is translated into an equivalent Answer Set Programming (ASP) program and is enriched with a set of non-monotonic ASP rules suitable to express defaults and exceptions. An ASP query answering approach is proposed and implemented. Furthermore, due to the increased expressiveness of non-monotonic rules, it supports qualitative spatial reasoning and query answering over spatial relations between artifacts, which is not possible with query languages such as SPARQL and SQWRL.

    Quantifier-Elimination Algorithms for Computing Policies of Quantified Boolean Formulas

    The validity problem for a quantified Boolean formula is a generalization of the satisfiability problem for a Boolean formula. Quantified Boolean formulas are useful for representing, for example, strategies in two-player games, but in such applications it is a solution to the associated search problem that is needed. Most recent decision procedures for quantified Boolean formulas are extensions of the Davis-Putnam search procedure and can easily be extended to the search problem. This is not the case for algorithms based on quantifier elimination. In this article, we show how quantifier-elimination algorithms can be extended to the search problem associated with quantified Boolean formulas.
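The basic idea, eliminating quantifiers by trying both values while recording the existential witnesses as a policy, can be sketched naively as follows. This is a brute-force expansion sketch for illustration, not the article's algorithms:

```python
def solve_qbf(prefix, matrix):
    """Naive quantifier elimination by expansion, extended to return a
    policy: for each existential variable, a map from the values of the
    preceding universal variables to a witness value.  `prefix` is a list
    of ('forall'|'exists', name); `matrix` is a predicate on assignments."""
    def rec(i, env):
        if i == len(prefix):
            return matrix(env), {}
        q, v = prefix[i]
        if q == 'exists':
            for val in (False, True):
                ok, pol = rec(i + 1, {**env, v: val})
                if ok:
                    # witness indexed by the preceding universal values
                    key = tuple(env[u] for qq, u in prefix[:i] if qq == 'forall')
                    return True, {**pol, (v, key): val}
            return False, {}
        ok0, p0 = rec(i + 1, {**env, v: False})   # forall: both branches
        if not ok0:
            return False, {}
        ok1, p1 = rec(i + 1, {**env, v: True})
        return (True, {**p0, **p1}) if ok1 else (False, {})
    return rec(0, {})

# Forall x, exists y : y == x  (the policy for y copies the value of x)
prefix = [('forall', 'x'), ('exists', 'y')]
ok, policy = solve_qbf(prefix, lambda e: e['y'] == e['x'])
```

The returned policy is exactly the kind of object the search problem asks for: a strategy telling each existential variable how to respond to the universal choices made before it.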

    An (Almost) Automatic Generation of a Compiler from Truth Tables to a Solver for Prenex Quantified Boolean Formulas

    This article proposes to extend the automatic generation, from the truth table of a binary logical operator, of a set of literal-based quantified Boolean propagation rules to the automatic generation of a compiler that takes as input the truth table of a binary logical operator and produces as output a solver for non-CNF prenex quantified Boolean formulas.

    The Cut Tool for QCSP

    Quantified Constraint Satisfaction Problems (QCSP) are a generalization of Constraint Satisfaction Problems (CSP) in which variables may be quantified existentially and universally. QCSP offers a natural framework to express PSPACE problems such as finite two-player games or planning under uncertainty. State-of-the-art QCSP solvers have an important drawback: they explore much larger combinatorial spaces than the natural search space of the original problem, since they are unable to recognize that some sub-problems are necessarily true. We introduce a new tool, inspired by the cut rule of Prolog and placed under the responsibility of the designer of the QCSP, to prune those parts of the search space which are known by construction to be useless. We use this new tool to restore, on the one hand, the annihilator property of true for disjunction in QCSP solvers and, on the other hand, to prune the search space in two-player games. It provides a simple way to use QCSP efficiently to design finite two-player games without restricting the QCSP language. The tool does not require modifying the QCSP solver; its only requirement is to be able to tell the QCSP solver that the current QCSP is solved. Our QCSP solver, built on top of GECODE, a CSP library, obtained very good results compared to state-of-the-art QCSP solvers.
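The effect of such a cut can be illustrated on a tiny quantified boolean solver. This is a toy sketch of the principle only; the real solver is built over GECODE and handles full QCSP:

```python
def solve(prefix, matrix, cut=None):
    """Tiny quantified boolean solver illustrating the cut idea.  `cut` is
    an optional designer-supplied predicate on partial assignments; when it
    holds, the current sub-problem is declared solved and the remaining
    variables are not explored -- the designer guarantees that the
    sub-problem is then necessarily true."""
    stats = {'nodes': 0}
    def rec(i, env):
        stats['nodes'] += 1
        if cut is not None and cut(env):
            return True                      # designer-asserted truth
        if i == len(prefix):
            return matrix(env)
        q, v = prefix[i]
        if q == 'exists':
            return rec(i + 1, {**env, v: False}) or rec(i + 1, {**env, v: True})
        return rec(i + 1, {**env, v: False}) and rec(i + 1, {**env, v: True})
    return rec(0, {}), stats['nodes']

# Exists x, forall y, exists z : x or (y and z).  Once x is True, the
# sub-problem is necessarily true, which the cut lets the solver exploit.
prefix = [('exists', 'x'), ('forall', 'y'), ('exists', 'z')]
matrix = lambda e: e['x'] or (e['y'] and e['z'])
ok_plain, nodes_plain = solve(prefix, matrix)
ok_cut, nodes_cut = solve(prefix, matrix, cut=lambda e: e.get('x') is True)
```

With the cut, the solver stops as soon as x = True is assigned instead of re-verifying, for every value of y and z, a sub-problem that is true by construction.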